
Closes #448 | Add/Update Dataloader alorese #541

Merged · 22 commits · Apr 29, 2024

Conversation

patrickamadeus (Collaborator)

Closes #448

Checklist

  • Confirm that this PR is linked to the dataset issue.
  • Create the dataloader script seacrowd/sea_datasets/{my_dataset}/{my_dataset}.py (please use only lowercase and underscore for dataset folder naming, as mentioned in dataset issue) and its __init__.py within {my_dataset} folder.
  • Provide values for the _CITATION, _DATASETNAME, _DESCRIPTION, _HOMEPAGE, _LICENSE, _LOCAL, _URLs, _SUPPORTED_TASKS, _SOURCE_VERSION, and _SEACROWD_VERSION variables.
  • Implement _info(), _split_generators() and _generate_examples() in dataloader script.
  • Make sure that the BUILDER_CONFIGS class attribute is a list with at least one SEACrowdConfig for the source schema and one for a seacrowd schema.
  • Confirm dataloader script works with datasets.load_dataset function.
  • Confirm that your dataloader script passes the test suite run with python -m tests.test_seacrowd seacrowd/sea_datasets/<my_dataset>/<my_dataset>.py or python -m tests.test_seacrowd seacrowd/sea_datasets/<my_dataset>/<my_dataset>.py --subset_id {subset_name_without_source_or_seacrowd_suffix}.
  • If my dataset is local, I have provided an output of the unit-tests in the PR (please copy paste). This is OPTIONAL for public datasets, as we can test these without access to the data files.

T2T

[screenshot of test output]

SPTEXT

[screenshot of test output]

SPTEXT_TRANS

[screenshot of test output]

@ljvmiranda921 (Collaborator) left a comment:

I was only able to test the t2t subset (my PC's almost out of storage 😅 ), and it works. I'm ok with the implementation, just have a few comments. Can you please run make check_file=... just so that the formatting is consistent everywhere?

seacrowd/sea_datasets/alorese/alorese.py
@@ -0,0 +1,710 @@
_URLS_DICT = {
ljvmiranda921 (Collaborator):

Just curious: how were you able to generate this dictionary? Do you think it's possible to automate this process instead of keeping this file?

patrickamadeus (Collaborator, author):

Hi @ljvmiranda921! Based on the archive, here are the steps:

  1. I scraped through the pagination to collect all of the blob URLs that lead to the detail pages.
  2. From each detail page, I scraped all of the .wav blob URLs.
  3. Now for the hard part: to match each .wav with its matching .xml (caption), I scraped the caption blob listing and found the most similar name for each corresponding file against the .wav filename.
  4. Sadly, the .xml does not always follow the same file naming as the .wav file, so I had to recheck them one by one. Not to mention there may be .wav files that do not have any .xml.

Because of 4, I decided not to include my scraping script in the code, since it might cause:

  1. trouble if even a slight UI change happens;
  2. a slow process: the pagination scrape alone covers ~200 items, and that does not include scraping the detail pages and captions again, so at minimum it is roughly ~200 * 3 = 600 URLs. The time needed may also vary with the user's network speed.

Furthermore, I had gone through the discussion on Discord before implementing alorese, because this one is a little tricky. I got approval for this somewhat "bulky" method for the sake of a better result. I can't open Discord right now; I'll paste the discussion thread later.
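The matching in step 3 could look something like this sketch. This is a hypothetical reconstruction, not the actual scraping script: the filenames are made-up examples, and `difflib.get_close_matches` stands in for whatever similarity measure was actually used.

```python
import difflib

def match_captions(wav_names, xml_names, cutoff=0.6):
    """Pair each .wav with the .xml whose filename stem is most similar.

    Returns a dict mapping wav name -> best xml name, or None when no
    caption is similar enough (step 4: some .wav files have no .xml).
    """
    xml_by_stem = {name.rsplit(".", 1)[0]: name for name in xml_names}
    pairs = {}
    for wav in wav_names:
        stem = wav.rsplit(".", 1)[0]
        best = difflib.get_close_matches(stem, xml_by_stem, n=1, cutoff=cutoff)
        pairs[wav] = xml_by_stem[best[0]] if best else None
    return pairs

# Made-up filenames illustrating the near-miss naming described in step 4.
wavs = ["alorese_story_01.wav", "alorese_song_02.wav", "interview_03.wav"]
xmls = ["alorese_story_01.xml", "alorese_song_2.xml"]
print(match_captions(wavs, xmls))
```

As step 4 notes, a fuzzy match like this still needs a manual recheck, since near-miss names can pair the wrong files.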

ljvmiranda921 (Collaborator):

Ah no worries! Thanks for outlining the steps! I guess there's no need to have this automated; what you did is OK already. Perhaps we should document that process somewhere? Maybe as a docstring in the alorese_url.py file? Just a short description to ensure that future folks can replicate where the URLs are coming from. :)

patrickamadeus (Collaborator, author):

With pleasure! Please check the newest commit 😄.

@ljvmiranda921 (Collaborator) left a comment:

LGTM! Let's just wait for @sabilmakbar's review.

@sabilmakbar (Collaborator) left a comment:

Initial review.

seacrowd/sea_datasets/alorese/alorese.py (outdated review threads)
@sabilmakbar (Collaborator):

Wait, I'm going to check it quickly; pardon the late response.

Comment on lines 67 to 98
BUILDER_CONFIGS = [
SEACrowdConfig(
name=f"{_DATASETNAME}_{subset}_source",
version=datasets.Version(_SOURCE_VERSION),
description=f"{_DATASETNAME} source schema for {subset} subset",
schema="source",
subset_id=f"{_DATASETNAME}_{subset}",
)
for subset in SUBSETS
] + [
SEACrowdConfig(
name=f"{_DATASETNAME}_t2t_seacrowd_t2t",
version=datasets.Version(_SEACROWD_VERSION),
description=f"{_DATASETNAME} SEACrowd schema for t2t subset",
schema="seacrowd_t2t",
subset_id=f"{_DATASETNAME}_t2t",
),
SEACrowdConfig(
name=f"{_DATASETNAME}_sptext_seacrowd_sptext",
version=datasets.Version(_SEACROWD_VERSION),
description=f"{_DATASETNAME} SEACrowd schema for sptext subset",
schema="seacrowd_sptext",
subset_id=f"{_DATASETNAME}_sptext",
),
SEACrowdConfig(
name=f"{_DATASETNAME}_sptext_trans_seacrowd_sptext",
version=datasets.Version(_SEACROWD_VERSION),
description=f"{_DATASETNAME} SEACrowd schema for sptext_trans subset",
schema="seacrowd_sptext",
subset_id=f"{_DATASETNAME}_sptext_trans",
),
]
sabilmakbar (Collaborator):

May I ask a few questions here?

  1. Overall, the schemas we will use here are T2T for the translation between the Indonesian and Alorese transcriptions of the same audio, and SPText for the ASR version, right?

  2. Which language is the audio content in, Indonesian or Alorese? Because when I looked at the code, there's no clear way to tell which language is available in the transcription (and it's crucial to correctly map the audio to the correct transcriptions).

  3. For the config names {_DATASETNAME}_sptext_seacrowd_sptext and {_DATASETNAME}_t2t_seacrowd_t2t: may I know why the naming isn't something like {_DATASETNAME}_seacrowd_sptext and {_DATASETNAME}_seacrowd_t2t? Any justification here?

  4. I saw that in the source config the info on speaker_id went missing. I thought the source config should include all columns and information (and stitch them together appropriately if the data is scattered across different files and configs, like what you did in your code).

patrickamadeus (Collaborator, author):

Hi @sabilmakbar, thanks for the review and questions!

  1. Yes.

  2. The audio is in Alorese; the language available in the transcription is denoted by either sptext for Alorese or sptext_trans for Indonesian. The mapping is done in this part:

[screenshot]

  3. Because of the subset naming, the testing code (and the dataset naming format) that was constructed needs to be in <DATASET_NAME>_<SUBSET_NAME>_seacrowd_<SCHEMA>. It just happens that the subset name is the same as the schema, so it might be confusing. Happy to change it if needed.

  4. Thanks for the input! Please review the latest commit, where I have addressed the nitpick.
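As an aside, the subset-to-language mapping described in point 2 could be sketched like this. This is a hypothetical illustration, not the actual code in the screenshot; it assumes the ISO 639-3 codes aol (Alorese) and ind (Indonesian).

```python
def transcription_language(subset_id: str) -> str:
    """Pick the transcription language from the subset name.

    sptext       -> Alorese transcription of the Alorese audio
    sptext_trans -> Indonesian translation of that transcription
    """
    return "ind" if subset_id.endswith("sptext_trans") else "aol"

print(transcription_language("alorese_sptext"))        # -> aol
print(transcription_language("alorese_sptext_trans"))  # -> ind
```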

sabilmakbar (Collaborator):

thanks for the prompt reply, @patrickamadeus!

The audio is in alorese, the lang available in transcription is denoted by either sptext for alorese or sptext_trans for indonesian. The mapping is done in this part

I don't know whether we need to create a schema of Alorese audio against Indonesian text for the SPText schema. My personal opinion is to remove it, since it could be very misleading (an ASR schema should provide text that is an actual transcription as-is, not a translated one).

Because of the subset naming, the testing code (and dataset naming format) that was constructed needs to be in <DATASET_NAME>_<SUBSET_NAME>_seacrowd_<SCHEMA>. It's just the subset name happens to be same as the schema, so it might be confusing. Happy to change if it might be needed.

If I understand this correctly, the subsets for this dataset are only the Alorese version and the Indonesian version. SPText and T2T don't fit the definition of a "subset" of a dataset, as only the schema is different.

wdyt? @ljvmiranda921 @holylovenia @SamuelCahyawijaya

Collaborator:

Ah I see, hmm, does it mean we should have separate data loaders for the SPText and T2T versions? I don't have a particularly strong opinion on either approach.

sabilmakbar (Collaborator):

Ah I see, hmm, does it mean we should have separate data loaders for the SPText and the T2T versions?

No, we can have it in a single dataloader; I think the subset and configuration naming should be modified slightly.

The audio is in alorese, the lang available in transcription is denoted by either sptext for alorese or sptext_trans for indonesian.

The text information provided in this dataset is sequential; i.e., for every audio file, there is a sequence of annotated texts with their start & end timestamps.

For the Alorese text and audio, this can be put in the ASR schema (or even a sequential audio split against its annotated text if we want to refine it further).

However, I don't think we should create an ASR schema for the Alorese audio and translated Indonesian annotation since the audio and text are in different languages.

And for T2T of Alorese and Indonesian translation, the existing implementation is correct, just need to reconstruct the configs list.

Therefore, my proposed configs are:

  1. Source -- containing Alorese Audio, Alorese Annotation (and its timestamp), and Indonesian Annotation (and its timestamp too)
  2. SPText -- containing Alorese Audio & its Annotation (the annotation could be combined into single text per audio -- as previously implemented or recreated using new sequenced schema of text)
  3. T2T -- containing Alorese Annotation & translated Indonesian Annotation (we can leave this as full-text translation, not word-to-word or phrase-to-phrase translation)
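The three proposed configs could be sketched as plain dicts like this. This is a hypothetical outline only: the real dataloader would build SEACrowdConfig objects from seacrowd.utils.configs, and the version strings here are placeholders.

```python
_DATASETNAME = "alorese"
_SOURCE_VERSION = "1.0.0"    # placeholder version
_SEACROWD_VERSION = "1.0.0"  # placeholder version

BUILDER_CONFIGS = [
    # 1. Source: Alorese audio + Alorese and Indonesian annotations with timestamps
    {"name": f"{_DATASETNAME}_source", "version": _SOURCE_VERSION,
     "schema": "source", "subset_id": _DATASETNAME},
    # 2. SPText: Alorese audio + its Alorese annotation (same-language pair only)
    {"name": f"{_DATASETNAME}_seacrowd_sptext", "version": _SEACROWD_VERSION,
     "schema": "seacrowd_sptext", "subset_id": _DATASETNAME},
    # 3. T2T: Alorese annotation + its Indonesian translation (full-text)
    {"name": f"{_DATASETNAME}_seacrowd_t2t", "version": _SEACROWD_VERSION,
     "schema": "seacrowd_t2t", "subset_id": _DATASETNAME},
]

print([cfg["name"] for cfg in BUILDER_CONFIGS])
```

Note how this drops the subset segment from the config names ({_DATASETNAME}_seacrowd_sptext rather than {_DATASETNAME}_sptext_seacrowd_sptext), matching the naming question raised earlier.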

holylovenia (Contributor):

I agree with @sabilmakbar's suggestion.

Btw, just confirming, do the audio recordings and the transcriptions match word-for-word, @patrickamadeus?

patrickamadeus (Collaborator, author):

Hi! Thanks for the suggestions @sabilmakbar !

Btw, just confirming, do the audio recordings and the transcriptions match word-for-word?

Yes! There are multiple timestamps to indicate when each word is spoken. @holylovenia, to be honest I haven't reviewed a substantial sample of the data to determine whether it matches word-for-word or not, but when I listened to the first 10 seconds of one particular example, it matched perfectly.

If there are no further suggestions or comments, I will implement the change by this weekend at the latest; I have a bunch of other stuff to do first.

patrickamadeus (Collaborator, author):

Hi @sabilmakbar ! Could you please check on the latest commit? I have done the revision 👍.

SPText -- containing Alorese Audio & its Annotation (the annotation could be combined into single text per audio -- as previously implemented or recreated using new sequenced schema of text)

For this one, I went with the previous implementation for now.
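The "combine the annotation into a single text per audio" option mentioned above could be sketched as follows. This is a hypothetical illustration of the idea, not the dataloader's actual code, and the segment values are dummy placeholders, not real data.

```python
def combine_annotations(segments):
    """Sort timestamped annotation segments by start time and join their texts
    into one transcription string per audio file."""
    ordered = sorted(segments, key=lambda seg: seg["start_ms"])
    return " ".join(seg["text"] for seg in ordered)

# Dummy placeholder segments (the real data carries start/end timestamps
# for each annotated span of the audio).
segments = [
    {"start_ms": 2400, "end_ms": 3100, "text": "word2"},
    {"start_ms": 1200, "end_ms": 2300, "text": "word1"},
]
print(combine_annotations(segments))  # -> "word1 word2"
```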

)
for subset in SUBSETS
] + [
BUILDER_CONFIGS = [SEACrowdConfig(name=f"{_DATASETNAME}_source", version=datasets.Version(_SOURCE_VERSION), description=f"{_DATASETNAME} source schema", schema="source", subset_id=f"{_DATASETNAME}",)] + [
sabilmakbar (Collaborator):

can we fix this formatting? :D

patrickamadeus (Collaborator, author):

Done, @sabilmakbar! Sorry, I forgot to delete the old `]` bracket.

@sabilmakbar (Collaborator):

Hi @patrickamadeus, I have already put in an updated review. Let us know once the suggestions have been addressed; both LJ and I will probably need to re-run the whole check once more to ensure it's correct, since this dataloader is quite complex. Thanks!

@sabilmakbar (Collaborator):

Hi @patrickamadeus, all looks good to me. Since LJ said he doesn't have much PC storage left (presumably), I'll proceed with the merge :) (I was able to download all the data & subsets and tested them too.)

How does it sound, @ljvmiranda921? If that's fine from your end, I'll approve and merge it

@ljvmiranda921 (Collaborator):

^Yes please feel free to merge! 🙇

fix formatting on `yield` of `_generate_examples`
@sabilmakbar (Collaborator) left a comment:

lgtm!

@sabilmakbar sabilmakbar merged commit 42d6285 into SEACrowd:master Apr 29, 2024
1 check passed

Successfully merging this pull request may close these issues: Create dataset loader for Alorese Collection.

5 participants